    The DNA Binding Properties of the Parsley bZIP Transcription Factor CPRF4a Are Regulated by Light

    The common plant regulatory factors (CPRFs) from parsley are transcription factors with a basic leucine zipper motif that bind to cis-regulatory elements frequently found in promoters of light-regulated genes. Recent studies have revealed that certain CPRF proteins are regulated in response to light by changes in their expression level and in their intracellular localization. Here, we describe an additional mechanism contributing to the light-dependent regulation of CPRF proteins. We show that the DNA binding activity of the factor CPRF4a is modulated in a phosphorylation-dependent manner and that cytosolic components are involved in the regulation of this process. Moreover, we have identified a cytosolic kinase responsible for CPRF4a phosphorylation. Modification of recombinant CPRF4a by this kinase, however, is insufficient to cause full activation of the factor, suggesting that additional modifications are required. Furthermore, we demonstrate that the DNA binding activity of the factor is modified upon light treatment. The results of additional irradiation experiments suggest that this photoresponse is controlled by different photoreceptor systems. We discuss the possible role of CPRF4a in light signal transduction as well as the emerging regulatory network controlling CPRF activities in parsley.

    Divergence of predictive model output as indication of phase transitions

    We introduce a new method to identify phase boundaries in physical systems. It is based on training a predictive model, such as a neural network, to infer a physical system's parameters from its state. The deviation of the inferred parameters from the true underlying parameters is most susceptible to change and diverges maximally in the vicinity of phase boundaries. Therefore, peaks in the divergence of the model's predictions are used as an indication of phase transitions. Our method is applicable to phase diagrams of arbitrary parameter dimension and requires no prior information about the phases. Applications to both the two-dimensional Ising model and the dissipative Kuramoto-Hopf model show promising results. Comment: 6 pages, 3 figures
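
    A minimal sketch of how such a divergence indicator could be computed in practice is given below. The data arrays, the scikit-learn regressor, and the assumption that samples are drawn exactly at the grid values are illustrative choices, not the authors' implementation.

        # Illustrative sketch (assumed setup, not the paper's code): train a predictive
        # model p_hat(x) on labeled states, then take the derivative of the deviation
        # p_hat - p across the parameter grid.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def divergence_indicator(samples, params, grid):
            """samples: (n, d) states; params: true parameter per sample; grid: sorted parameter values."""
            params, grid = np.asarray(params), np.asarray(grid)
            model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
            model.fit(samples, params)
            # Mean inferred parameter at each grid point (samples assumed drawn at grid values).
            p_hat = np.array([model.predict(samples[params == p]).mean() for p in grid])
            # Divergence of the deviation field; for a one-dimensional grid this is a derivative.
            return np.gradient(p_hat - grid, grid)

        # Peaks of the returned curve are read as candidate phase boundaries.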

    High Density Quark Matter and the Renormalization Group in QCD with two and three flavors

    We consider the most general four-fermion operators in QCD for two and three massless flavors and study their renormalization in the vicinity of the Fermi surface. We show that, asymptotically, the largest coupling corresponds to scalar diquark condensation. Asymptotically, the direct and iterated (molecular) instanton interactions become equal. We provide simple arguments for the form of the operators that diagonalize the evolution equations. Some solutions of the flow equations exhibit instabilities arising from purely repulsive interactions. Comment: 13 pages, 2 figures
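
    The possibility of an instability driven by purely repulsive couplings can be illustrated with a toy flow of the generic one-loop form dg_i/dt = -sum_jk c_ijk g_j g_k. The two couplings, the placeholder coefficients, and the initial conditions below are illustrative assumptions and are not the operators or coefficients derived in the paper.

        # Toy sketch (assumed coefficients, not the paper's flow equations): two
        # schematically coupled quadratic flow equations whose eigen-combinations
        # g_pm = g1 +/- g2 obey dg_pm/dt = -g_pm^2. Even when both couplings start
        # out repulsive (positive in this convention), the attractive eigenmode g_-
        # runs toward a Landau-pole-like divergence, i.e. an instability.
        import numpy as np
        from scipy.integrate import solve_ivp

        def flow(t, g):
            g1, g2 = g
            return [-(g1**2 + g2**2), -2.0 * g1 * g2]

        g0 = [0.5, 1.0]  # purely repulsive starting point
        sol = solve_ivp(flow, (0.0, 1.9), g0, t_eval=np.linspace(0.0, 1.9, 100))

        g_minus = sol.y[0] - sol.y[1]  # the attractive eigen-combination
        print(g_minus[-1])             # grows without bound as t approaches 2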

    Replacing Neural Networks by Optimal Analytical Predictors for the Detection of Phase Transitions

    Identifying phase transitions and classifying phases of matter is central to understanding the properties and behavior of a broad range of material systems. In recent years, machine-learning (ML) techniques have been successfully applied to perform such tasks in a data-driven manner. However, the success of this approach notwithstanding, we still lack a clear understanding of ML methods for detecting phase transitions, particularly of those that utilize neural networks (NNs). In this work, we derive analytical expressions for the optimal output of three widely used NN-based methods for detecting phase transitions. These optimal predictions correspond to the results obtained in the limit of high model capacity. Therefore, in practice, they can, for example, be recovered using sufficiently large, well-trained NNs. The inner workings of the considered methods are revealed through the explicit dependence of the optimal output on the input data. By evaluating the analytical expressions, we can identify phase transitions directly from experimentally accessible data without training NNs, which makes this procedure favorable in terms of computation time. Our theoretical results are supported by extensive numerical simulations covering, e.g., topological, quantum, and many-body localization phase transitions. We expect similar analyses to provide a deeper understanding of other classification tasks in condensed matter physics.
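
    A minimal sketch of the general idea for discrete (hashable) samples is given below: estimate the sample distribution at each grid point by counting, then evaluate a Bayes-optimal, confusion-style accuracy for a candidate split directly from those distributions, with no network training. The data layout and the particular indicator are illustrative stand-ins and differ in detail from the expressions derived in the paper.

        # Illustrative sketch (assumed data layout, simplified indicator): compute an
        # optimal prediction directly from empirical sample distributions instead of
        # training a neural network. `data[p]` holds hashable samples drawn at grid point p.
        from collections import Counter
        import numpy as np

        def empirical_dist(samples):
            counts = Counter(samples)
            n = len(samples)
            return {x: c / n for x, c in counts.items()}

        def optimal_confusion_accuracy(data, grid, split):
            """Bayes-optimal accuracy for classifying grid points as p < split vs p >= split."""
            left = [p for p in grid if p < split]
            right = [p for p in grid if p >= split]
            dists = {p: empirical_dist(data[p]) for p in grid}
            support = set().union(*(d.keys() for d in dists.values()))
            # Mixture distributions on the two sides of the split.
            P_left = {x: np.mean([dists[p].get(x, 0.0) for p in left]) for x in support}
            P_right = {x: np.mean([dists[p].get(x, 0.0) for p in right]) for x in support}
            # Always predict the more likely side for each sample value x.
            w_left, w_right = len(left) / len(grid), len(right) / len(grid)
            return sum(max(w_left * P_left[x], w_right * P_right[x]) for x in support)

        # A local maximum of this accuracy as a function of `split`, away from the grid
        # edges, is read as a signature of a phase transition.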

    Unsupervised identification of topological order using predictive models

    Machine-learning-driven models have proven to be powerful tools for the identification of phases of matter. In particular, unsupervised methods hold the promise of helping to discover new phases of matter without the need for any prior theoretical knowledge. While the use of unsupervised methods has proven successful for phases characterized by a broken symmetry, topological phases without a local order parameter seem to be much harder to identify without supervision. Here, we use an unsupervised approach to identify topological phases and transitions out of them. We train artificial neural nets to relate configurational data or measurement outcomes to quantities such as temperature or tuning parameters in the Hamiltonian. The accuracy of these predictive models can then serve as an indicator of phase transitions. We successfully illustrate this approach on both the classical Ising gauge theory and the quantum ground state of a generalized toric code. Comment: 12 pages, 13 figures
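
    A minimal sketch of using a predictive model's accuracy as a transition indicator is given below; the data arrays, the regressor, the train/test split, and the use of mean absolute error are illustrative assumptions rather than the setup used in the paper.

        # Illustrative sketch (assumed setup): train a regressor that maps configurations
        # to the tuning parameter and monitor the held-out prediction error at each grid
        # point; a sharp change across the grid is read as a phase boundary.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        def prediction_error_curve(configs, params, grid):
            """configs: (n, d) array; params: NumPy array of grid values, one per configuration."""
            X_tr, X_te, y_tr, y_te = train_test_split(configs, params, test_size=0.3)
            model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000).fit(X_tr, y_tr)
            y_hat = model.predict(X_te)
            # Mean absolute error of the inferred parameter at each grid point.
            return np.array([np.abs(y_hat[y_te == p] - p).mean() for p in grid])

        # No labeling of the phases themselves is required; only the parameter values at
        # which the configurations were generated enter the training.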

    Fast Detection of Phase Transitions with Multi-Task Learning-by-Confusion

    Machine learning has been successfully used to study phase transitions. One of the most popular approaches to identifying critical points from data without prior knowledge of the underlying phases is the learning-by-confusion scheme. As input, it requires system samples drawn from a grid of the parameter whose change is associated with potential phase transitions. Up to now, the scheme required training a distinct binary classifier for each possible splitting of the grid into two sides, resulting in a computational cost that scales linearly with the number of grid points. In this work, we propose and showcase an alternative implementation that only requires the training of a single multi-class classifier. Ideally, such multi-task learning eliminates the scaling with respect to the number of grid points. In applications to the Ising model and an image dataset generated with Stable Diffusion, we find significant speedups that closely correspond to the ideal case, with only minor deviations. Comment: 7 pages, 3 figures, Machine Learning and the Physical Sciences Workshop, NeurIPS 202
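
    A plausible sketch of how a single multi-class classifier can serve all splits is given below; the model choice, the evaluation on the training samples, and the accuracy-based score are assumptions for illustration and may differ from the paper's implementation. Grid points are assumed to be labeled by integer indices 0..n_grid-1 so that the classifier's probability columns align with the grid.

        # Illustrative sketch (assumed setup): train ONE multi-class classifier over the
        # parameter grid, then reuse its class probabilities to score every bipartition
        # of the grid, instead of training one binary classifier per split.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def confusion_curve(samples, grid_labels, n_grid):
            """grid_labels: integer grid index (0..n_grid-1) of each sample."""
            clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=2000)
            clf.fit(samples, grid_labels)           # single multi-class training run
            probs = clf.predict_proba(samples)      # shape (n_samples, n_grid)
            labels = np.asarray(grid_labels)
            scores = []
            for k in range(1, n_grid):              # split: indices < k vs >= k
                p_left = probs[:, :k].sum(axis=1)   # probability of lying left of the split
                scores.append(np.mean((p_left >= 0.5) == (labels < k)))
            return np.array(scores)

        # As in standard learning-by-confusion, a local maximum of this curve away from
        # the grid edges indicates a phase transition.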

    Two UD Alumni Contribute $900,000 for Scholarships

    News release announces that two alumni have given anonymous gifts totaling $900,000 to establish endowed scholarship funds; one is for undergraduate scholarships; the other is for scholarships for black graduates of Dayton-area Catholic high schools.